In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code that cannot fit in the notebook is required, make sure that the extra Python code is successfully imported and included in your submission.
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the IPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a writeup template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.
The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue them, you can include the code in this IPython notebook and also discuss the results in the writeup file.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.
In [1]:
# Load pickled data
import pickle
from keras.datasets import cifar10
from sklearn.model_selection import train_test_split
# TODO: Fill this in based on where you saved the training and testing data
#training_file = "traffic-signs-data/train.p"
#validation_file = "traffic-signs-data/valid.p"
#testing_file = "traffic-signs-data/test.p"
#with open(training_file, mode='rb') as f:
# train = pickle.load(f)
#with open(validation_file, mode='rb') as f:
# valid = pickle.load(f)
#with open(testing_file, mode='rb') as f:
# test = pickle.load(f)
#X_train, y_train = train['features'], train['labels']
#X_valid, y_valid = valid['features'], valid['labels']
#X_test, y_test = test['features'], test['labels']
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.2, random_state=0)
# y_train.shape is 2d, (50000, 1). While Keras is smart enough to handle this
# it's a good idea to flatten the array.
y_train = y_train.reshape(-1)
y_test = y_test.reshape(-1)
y_valid = y_valid.reshape(-1)
The pickled data is a dictionary with 4 key/value pairs:
- 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
- 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
- 'sizes' is a list containing tuples, (width, height), representing the original width and height of the image.
- 'coords' is a list containing tuples, (x1, y1, x2, y2), representing coordinates of a bounding box around the sign in the image. These coordinates assume the original image; the pickled data contains resized versions (32 by 32) of these images.

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.
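If you work with the pickled GTSRB files rather than the CIFAR-10 data loaded above, a minimal sketch for loading one file and inspecting the structure described here might look like the following; the path reuses the placeholder from the commented-out cell above and is only an assumption about where the data is saved.

import pickle

# Hypothetical path; adjust to wherever the pickled training data was saved.
training_file = "traffic-signs-data/train.p"

with open(training_file, mode='rb') as f:
    train = pickle.load(f)

# The dictionary should expose the four keys described above.
print(train.keys())
print(train['features'].shape)  # expected (num examples, 32, 32, 3) after resizing
print(train['labels'].shape)    # expected (num examples,)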
In [3]:
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
n_train = len(X_train)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels are there in the dataset?
n_classes = len(set(y_train))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended; suggestions include plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.
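As a concrete example of the suggested count-per-sign plot, here is a minimal sketch using numpy and matplotlib; it assumes the y_train labels loaded in the cells above.

import numpy as np
import matplotlib.pyplot as plt

# Count how many training examples belong to each class id.
class_ids, counts = np.unique(y_train, return_counts=True)

plt.figure(figsize=(12, 4))
plt.bar(class_ids, counts)
plt.xlabel("class id")
plt.ylabel("number of training examples")
plt.title("Training examples per class")
plt.show()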
In [4]:
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
label_name = ["" for x in range(43)]
label_name[0] = "Speed limit (20km/h)"
label_name[1] = "Speed limit (30km/h)"
label_name[2] = "Speed limit (50km/h)"
label_name[3] = "Speed limit (60km/h)"
label_name[4] = "Speed limit (70km/h)"
label_name[5] = "Speed limit (80km/h)"
label_name[6] = "End of speed limit (80km/h)"
label_name[7] = "Speed limit (100km/h)"
label_name[8] = "Speed limit (120km/h)"
label_name[9] = "No passing"
label_name[10] = "No passing for vehicles over 3.5 metric tons"
label_name[11] = "Right-of-way at the next intersection"
label_name[12] = "Priority road"
label_name[13] = "Yield"
label_name[14] = "Stop"
label_name[15] = "No vehicles"
label_name[16] = "Vehicles over 3.5 metric tons prohibited"
label_name[17] = "No entry"
label_name[18] = "General caution"
label_name[19] = "Dangerous curve to the left"
label_name[20] = "Dangerous curve to the right"
label_name[21] = "Double curve"
label_name[22] = "Bumpy road"
label_name[23] = "Slippery road"
label_name[24] = "Road narrows on the right"
label_name[25] = "Road work"
label_name[26] = "Traffic signals"
label_name[27] = "Pedestrians"
label_name[28] = "Children crossing"
label_name[29] = "Bicycles crossing"
label_name[30] = "Beware of ice/snow"
label_name[31] = "Wild animals crossing"
label_name[32] = "End of all speed and passing limits"
label_name[33] = "Turn right ahead"
label_name[34] = "Turn left ahead"
label_name[35] = "Ahead only"
label_name[36] = "Go straight or right"
label_name[37] = "Go straight or left"
label_name[38] = "Keep right"
label_name[39] = "Keep left"
label_name[40] = "Roundabout mandatory"
label_name[41] = "End of no passing"
label_name[42] = "End of no passing by vehicles over 3.5 metric tons"
unique_label = set(y_valid)
maximum_traffic_signs_to_print = 20
def print_overite(title):
print(title, end='\r')
def fast_draw_all_traffic_signs_in_different_images():
size_dataset = len(y_valid)
for label in unique_label:
plt.figure(figsize=(16,0.8))
traffic_sign_index = 0
number_of_signs_printed = 0
for i in range(size_dataset):
#title = "Label: " + str(label) + "/" + str(len(unique_label) - 1) + " processing " + str(i) + "/" + str(size_dataset - 1)
#print_overite(title)
if (y_valid[i] == label):
traffic_signs = plt.subplot(1, maximum_traffic_signs_to_print, traffic_sign_index+1)
traffic_signs.imshow(X_valid[i], interpolation='nearest')
traffic_signs.axis('off')
traffic_sign_index += 1
number_of_signs_printed += 1
if (number_of_signs_printed == maximum_traffic_signs_to_print):
break
print(str(label+1) + "/" + str(len(unique_label)) + " - " + label_name[label])
plt.show()
def slow_draw_all_traffic_signs_in_one_image():
number_of_labels = len(unique_label)
size = 0.8
width = size * maximum_traffic_signs_to_print
height = size * number_of_labels
size_dataset = len(y_valid)
print("Number of labels: " + str(number_of_labels))
print("Total size of the image w:" + str(width) + " h:" + str(height))
print("Total size of the dataset: " + str(size_dataset))
print("Creating subplots, this might take a long time...")
fig, traffic_signs = plt.subplots(number_of_labels, maximum_traffic_signs_to_print, figsize=(width,height))
for label in range(number_of_labels):
traffic_sign_index = 0
number_of_signs_printed = 0
for i in range(size_dataset):
title = "Processing label: " + str(label) + "/" + str(number_of_labels - 1)
print_overite(title)
if (y_valid[i] == label):
traffic_signs[label][traffic_sign_index].imshow(X_valid[i], interpolation='nearest')
traffic_signs[label][traffic_sign_index].axis('off')
traffic_sign_index += 1
number_of_signs_printed += 1
if (number_of_signs_printed == maximum_traffic_signs_to_print):
break
print()
print("Painting...")
fast_draw_all_traffic_signs_in_different_images()
#slow_draw_all_traffic_signs_in_one_image()
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
There are various aspects to consider when thinking about this problem, such as the neural network architecture, the preprocessing techniques applied to the images, and the number of examples available per label.
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.
NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
In [5]:
### Preprocess the data here. Preprocessing steps could include normalization, converting to grayscale, etc.
### Feel free to use as many code cells as needed.
#X_train, y_train = train['features'], train['labels']
#X_valid, y_valid = valid['features'], valid['labels']
#X_test, y_test = test['features'], test['labels']
import cv2
import tensorflow as tf
import numpy as np
def preprocess(x):
x = [cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) for image in x]
return np.reshape(x, (-1, 32, 32, 1))
X_train = preprocess(X_train)
X_valid = preprocess(X_valid)
X_test = preprocess(X_test)
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
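Note that the preprocess function above only converts the images to grayscale. Normalization, mentioned in the comment at the top of this cell, is another common step; a minimal sketch of a variant that adds it (an illustration only, not the preprocessing used for the results below) would be:

def preprocess_normalized(x):
    # Convert to grayscale, then scale pixel values to roughly [-1, 1].
    x = [cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) for image in x]
    x = np.reshape(x, (-1, 32, 32, 1)).astype(np.float32)
    return (x - 128.0) / 128.0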
In [6]:
### Define your architecture here.
### Feel free to use as many code cells as needed.
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Hyperparameters
mu = 0
sigma = 0.1
# Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# Activation.
conv1 = tf.nn.relu(conv1)
# Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# Activation.
conv2 = tf.nn.relu(conv2)
# Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# Activation.
fc1 = tf.nn.relu(fc1)
# Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# Activation.
fc2 = tf.nn.relu(fc2)
# Layer 5: Fully Connected. Input = 84. Output = n_classes.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, n_classes), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(n_classes))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
A validation set can be used to assess how well the model is performing. A low accuracy on both the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
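As a quick way to apply that rule of thumb in code, the two accuracies can be compared directly. A minimal sketch, assuming the evaluate helper defined in the next cell and a session in which the trained weights have been restored; the thresholds are arbitrary illustrations, not tuned values.

train_accuracy = evaluate(X_train, y_train)
valid_accuracy = evaluate(X_valid, y_valid)

if train_accuracy < 0.9 and valid_accuracy < 0.9:
    print("Both accuracies are low -> likely underfitting")
elif train_accuracy - valid_accuracy > 0.05:
    print("Training accuracy is well above validation accuracy -> likely overfitting")
else:
    print("Training and validation accuracy are close -> reasonable fit")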
In [7]:
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
#Constants
EPOCHS = 10
BATCH_SIZE = 128
# Features and Labels
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, n_classes)
#Training Pipeline
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
#Model Evaluation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
#Train the Model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
print("EPOCH {}".format(i+1) + ", accuracy: {:.3f}".format(evaluate(X_valid, y_valid)))
saver.save(sess, './lenet')
print("Model saved")
In [8]:
# Evaluate the accuracy of the trained model on the validation, training, and test sets
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
#Model Evaluation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
def evaluate_accuracy(kind, X_data, y_data):
    # Restore the latest checkpoint and report accuracy on the given dataset.
    with tf.Session() as sess:
        saver.restore(sess, tf.train.latest_checkpoint('.'))
        accuracy = evaluate(X_data, y_data)
        print(kind + " Accuracy = {:.3f}".format(accuracy))
evaluate_accuracy("Validation", X_valid, y_valid)
evaluate_accuracy("Training", X_train, y_train)
evaluate_accuracy("Test", X_test, y_test)
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
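The label_name list built in the visualization cell above hard-codes this id -> name mapping; if the signnames.csv file is available next to the notebook (an assumption about its location), the same mapping could instead be read with a short sketch like the one below.

import csv

# Hypothetical location of the id -> name mapping file.
with open("signnames.csv", "r") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    sign_names = {int(class_id): name for class_id, name in reader}

print(sign_names[14])  # label 14 is "Stop" in the label_name list above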
In [ ]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import matplotlib.image as mpimg
#Images from http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset#Downloads
#newImageFileNames = [
# "new-images/00000.ppm",
# "new-images/00001.ppm",
# "new-images/00002.ppm",
# "new-images/00003.ppm",
# "new-images/00004.ppm"
#]
#y_newImages = [16, 1, 38, 33, 11]
#Images from Google search https://www.google.co.uk/search?q=german+road+signs
newImageFileNames = [
"random-web-images/label_1.jpg",
"random-web-images/label_17.jpg",
"random-web-images/label_18.jpg",
"random-web-images/label_25.jpg",
"random-web-images/label_28.jpg"
]
y_newImages = [1, 17, 18, 25, 28]
newImages = [mpimg.imread(newImageFileName) for newImageFileName in newImageFileNames]
def displayImages(images):
size = len(images)
for i in range(size):
image = plt.subplot(1, size, i+1)
image.imshow(images[i])
displayImages(newImages)
In [ ]:
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
#Pre-Process images
newImages = [cv2.resize(newImage, (32, 32)) for newImage in newImages]
X_newImages = preprocess(newImages).astype(np.float32)
def predict_signs(X):
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
softmax = tf.nn.softmax(logits)
label_predictions = sess.run(softmax, feed_dict={x: X})
return [np.argmax(label_prediction) for label_prediction in label_predictions]
predicted_labels = predict_signs(X_newImages)
print("Predicted labels: " + str(predicted_labels))
print("Correct labels: " + str(y_newImages))
In [ ]:
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
def ratio_correct_matches(predicted_labels, correct_labels):
number_of_correct_matches = 0
number_of_labels = len(predicted_labels)
for i in range(number_of_labels):
if (predicted_labels[i] == correct_labels[i]):
number_of_correct_matches += 1
return number_of_correct_matches * 100 / number_of_labels
ratio_success = ratio_correct_matches(predicted_labels, y_newImages)
print("Accuracy is " + str(ratio_success) + "%")
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:
# (5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
Looking just at the first row, we get [0.34763842, 0.24879643, 0.12789202]; you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.
In [ ]:
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
def softmax_top_probabilities(X, number_of_probabilities):
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
softmax = tf.nn.softmax(logits)
softmax_probabilities = sess.run(softmax, feed_dict={x: X})
top_probabilities = tf.nn.top_k(softmax_probabilities, number_of_probabilities)
return sess.run(top_probabilities)
top_probabilities = softmax_top_probabilities(X_newImages, 5)
for i in range(len(X_newImages)):
print("Top five softmax probabilities for image number " + str(i+1))
print(top_probabilities.values[i])
print("-> top five labels: " + str(top_probabilities.indices[i]))
print()
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the IPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.
In [ ]:
#Traffic sign General Caution (label 18) analysis
problematic_label = 18
general_caution = []
general_caution_label = []
for i in range(len(y_test)):
if (y_test[i] == problematic_label):
general_caution.append(X_test[i])
general_caution_label.append(problematic_label)
evaluate_accuracy("General Caution", general_caution, general_caution_label)